Motivation: Enhancers are important cis-regulatory elements that regulate a wide range of biological functions and enhance the transcription of target genes. Although many state-of-the-art computational methods have been proposed to efficiently identify enhancers, learning globally contextual features remains one of the challenges for computational methods. Given the similarities between biological sequences and natural language sentences, BERT-based language techniques have been applied to extract complex contextual features in various computational biology tasks such as protein function/structure prediction. To speed up research on enhancer identification, it is urgent to construct a BERT-based enhancer language model. Results: In this paper, we propose a multi-scale enhancer identification method (iEnhancer-ELM) based on enhancer language models, which treat enhancer sequences as natural language sentences composed of k-mer nucleotides. iEnhancer-ELM extracts position-aware contextual information of multi-scale k-mers from raw enhancer sequences. Benefiting from the complementary information of k-mers at multiple scales, we ensemble four iEnhancer-ELM models to improve enhancer identification. Benchmark comparisons show that our model outperforms state-of-the-art methods. Using the interpretable attention mechanism, we identify 30 biological patterns, 40% (12/30) of which are verified by a widely used motif tool (STREME) and a popular database (JASPAR), demonstrating that our model can potentially reveal the biological mechanisms of enhancers. Availability: The source code is available at https://github.com/chen-bioinfo/iEnhancer-ELM Contact: junjiechen@hit.edu.cn and junjie.chen.hit@gmail.com; Supplementary information: Supplementary data are available at Bioinformatics online.
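As a rough illustration of the k-mer view of sequences described above, the following sketch tokenizes a DNA sequence into overlapping k-mers at several scales, the kind of input a BERT-style enhancer language model would consume; the function names and the choice of scales are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: split a DNA sequence into overlapping k-mer "words" at several scales.
def kmer_tokens(seq: str, k: int) -> list[str]:
    """Return the overlapping k-mers of a DNA sequence (stride 1)."""
    seq = seq.upper()
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def multi_scale_tokens(seq: str, scales=(3, 4, 5, 6)) -> dict[int, list[str]]:
    """Tokenize one sequence at several k-mer scales, one per model in an ensemble."""
    return {k: kmer_tokens(seq, k) for k in scales}

if __name__ == "__main__":
    enhancer = "ATGCGTACGTTAGC"
    for k, tokens in multi_scale_tokens(enhancer).items():
        print(k, tokens[:5])
```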
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, achieving up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
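For readers unfamiliar with the INT8 conversion step, the sketch below shows a generic post-training full-integer quantization flow with TensorFlow Lite; the toy super-resolution network and the random representative dataset are placeholders, and this is not any particular team's challenge pipeline.

```python
# Generic post-training full-integer (INT8) quantization with TensorFlow Lite.
import tensorflow as tf

def build_tiny_sr_model(scale: int = 3) -> tf.keras.Model:
    # Toy stand-in for an SR network: a few convolutions plus pixel shuffle (x3 upscaling).
    inp = tf.keras.Input(shape=(64, 64, 3))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(3 * scale * scale, 3, padding="same")(x)
    out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
    return tf.keras.Model(inp, out)

def representative_data_gen():
    # In practice: low-resolution DIV2K patches; random data keeps the sketch self-contained.
    for _ in range(100):
        yield [tf.random.uniform((1, 64, 64, 3), 0.0, 255.0)]

converter = tf.lite.TFLiteConverter.from_keras_model(build_tiny_sr_model())
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("sr_int8.tflite", "wb") as f:
    f.write(converter.convert())
```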
Recent trends in cloud computing technology have effectively boosted the application of visual inspection. However, most available systems work in a human-in-the-loop manner and cannot provide long-term support for online applications. To take a step forward, this paper outlines an automatic annotation system called SSAA, which works in a self-supervised learning manner to continuously perform online visual inspection in manufacturing automation scenarios. Benefiting from self-supervised learning, SSAA effectively establishes visual inspection applications over the whole lifecycle. In the early stage, using only anomaly-free data, unsupervised algorithms are employed to handle the pretext task and generate coarse labels for subsequent data. Supervised algorithms are then trained for the downstream task. With a user-friendly web-based interface, SSAA makes it convenient to integrate and deploy both the unsupervised and supervised algorithms. So far, the SSAA system has been used in several real-life industrial applications.
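A minimal sketch of the lifecycle described above, assuming generic scikit-learn models in place of SSAA's actual algorithms: an unsupervised model fit on anomaly-free data produces coarse labels that then bootstrap a supervised detector.

```python
# Illustrative two-stage lifecycle (not SSAA's actual algorithms).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(500, 16))    # early-stage, anomaly-free data
incoming_feats = rng.normal(0.5, 1.5, size=(200, 16))  # later online stream

# Stage 1: pretext / unsupervised model generates coarse labels (1 = anomaly).
pretext = IsolationForest(random_state=0).fit(normal_feats)
coarse_labels = (pretext.predict(incoming_feats) == -1).astype(int)

# Stage 2: supervised downstream model trained on the coarse labels.
downstream = RandomForestClassifier(random_state=0).fit(incoming_feats, coarse_labels)
print("predicted anomaly rate:", downstream.predict(incoming_feats).mean())
```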
Blind face restoration aims to recover high-quality face images from unknown degradations. Since face images contain rich contextual information, we propose a method, RestoreFormer, which explores fully-spatial attention to model contextual information and surpasses existing works that rely on local operators. Compared with prior arts, RestoreFormer has several benefits. First, unlike the conventional multi-head self-attention in previous vision transformers (ViTs), RestoreFormer incorporates a multi-head cross-attention layer to learn fully-spatial interactions between corrupted queries and high-quality key-value pairs. Second, the key-value pairs in RestoreFormer are sampled from a reconstruction-oriented high-quality dictionary whose elements are rich in high-quality facial features specifically tailored to face reconstruction, leading to superior restoration results. Third, RestoreFormer outperforms advanced state-of-the-art methods on one synthetic dataset and three real-world datasets, and produces images with better visual quality.
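The cross-attention interaction described above can be sketched as follows; the tensor shapes, dictionary size, and use of PyTorch's built-in attention module are assumptions for illustration, not RestoreFormer's architecture.

```python
# Multi-head cross-attention: corrupted queries attend over a high-quality dictionary.
import torch
import torch.nn as nn

embed_dim, num_heads = 256, 8
cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

corrupted_queries = torch.randn(1, 64 * 64, embed_dim)  # degraded face features, flattened spatially
hq_dictionary = torch.randn(1, 512, embed_dim)          # reconstruction-oriented high-quality entries

# Each spatial location queries the dictionary, producing restoration-oriented features.
fused, attn_weights = cross_attn(query=corrupted_queries, key=hq_dictionary, value=hq_dictionary)
print(fused.shape)  # torch.Size([1, 4096, 256]) -- one fused vector per spatial location
```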
NL2VIS, which translates natural language (NL) queries into corresponding visualizations (VIS), has attracted increasing attention from both commercial visualization vendors and academic researchers. In the past few years, advanced deep-learning-based models have achieved human-like performance on many natural language processing (NLP) tasks, which clearly suggests that deep-learning-based techniques are a good choice for advancing the NL2VIS field. However, a major obstacle is the lack of benchmarks with a large number of (NL, VIS) pairs. We present nvBench, the first large-scale NL2VIS benchmark, containing 25,750 (NL, VIS) pairs over 750 tables from 105 domains, synthesized from (NL, SQL) benchmarks to support cross-domain NL2VIS tasks. The quality of nvBench has been extensively validated by 23 experts and more than 300 crowd workers. Training deep-learning-based models on nvBench demonstrates that nvBench can push forward the field of NL2VIS.
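Purely for illustration, one (NL, VIS) pair might be organized as shown below; the field names and the visualization encoding are hypothetical and do not reflect nvBench's actual schema.

```python
# Hypothetical shape of a single (NL, VIS) pair; not nvBench's real format.
example_pair = {
    "nl_query": "Show the number of rentals per city as a bar chart.",
    "table": "rentals",
    "vis": {
        "chart_type": "bar",
        "x": "city",
        "y": "COUNT(*)",
        "group_by": "city",
    },
}
print(example_pair["vis"]["chart_type"])
```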
Nonconvex-concave min-max problems arise in many machine learning applications, including minimizing a pointwise maximum of a finite set of nonconvex functions and robust adversarial training of neural networks. A popular approach to this problem is the gradient descent-ascent (GDA) algorithm, which unfortunately can exhibit oscillation in the nonconvex setting. In this paper, we introduce a "smoothing" scheme that can be combined with GDA to stabilize the oscillation and ensure convergence to a stationary solution. We prove that the stabilized GDA algorithm achieves an $O(1/\epsilon^2)$ iteration complexity for minimizing the pointwise maximum of a finite collection of nonconvex functions. Moreover, the smoothed GDA algorithm achieves an $O(1/\epsilon^4)$ iteration complexity for general nonconvex-concave problems. Extensions of this stabilized GDA algorithm to the multi-block case are presented. To the best of our knowledge, this is the first algorithm to achieve $O(1/\epsilon^2)$ for a class of nonconvex-concave problems. We illustrate the practical efficiency of the stabilized GDA algorithm on robust training.
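For context, the setting can be written out as follows: the general nonconvex-concave min-max problem, its finite-max special case, and the plain GDA iteration that the smoothing scheme is designed to stabilize (the precise form of the smoothing is left to the paper).

```latex
% General nonconvex-concave min-max problem (f nonconvex in x, concave in y):
\min_{x \in \mathbb{R}^n} \; \max_{y \in \mathcal{Y}} \; f(x, y)

% Finite-max special case: minimizing a pointwise maximum of nonconvex functions
\min_{x \in \mathbb{R}^n} \; \max_{1 \le i \le m} f_i(x)

% Plain gradient descent-ascent (GDA) with step sizes \eta_x, \eta_y:
x_{t+1} = x_t - \eta_x \nabla_x f(x_t, y_t), \qquad
y_{t+1} = \mathrm{proj}_{\mathcal{Y}}\!\big(y_t + \eta_y \nabla_y f(x_t, y_t)\big)
```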
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet they ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and addressing the dilemma between classification and localization performance.
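Since both issues revolve around CAM, here is a minimal sketch of how a class activation map is formed, using random tensors as placeholders; it follows the standard CAM construction rather than KD-CI-CAM's full pipeline.

```python
# Standard CAM: a class's activation map is the classifier-weighted sum of the final conv features.
import torch
import torch.nn.functional as F

num_classes, channels, h, w = 10, 512, 14, 14
feature_maps = torch.randn(1, channels, h, w)     # last conv features of one image (placeholder)
fc_weights = torch.randn(num_classes, channels)   # classifier weights after global average pooling

target_class = 3
cam = torch.einsum("c,bchw->bhw", fc_weights[target_class], feature_maps)
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
cam = F.interpolate(cam[None], size=(224, 224), mode="bilinear", align_corners=False)[0]
print(cam.shape)  # torch.Size([1, 224, 224]) -- upsampled map used for localization
```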
In this paper, a semantic communication framework for image transmission is developed. In the investigated framework, a set of servers cooperatively transmit images to a set of users using semantic communication techniques. To evaluate the performance of the studied semantic communication system, a multimodal metric is proposed to measure the correlation between the extracted semantic information and the original image. To meet the ISS requirement of each user, each server must jointly determine the semantic information to be transmitted and the resource blocks (RBs) used for semantic information transmission. We formulate this problem as an optimization problem that aims to minimize each server's transmission latency while meeting the ISS requirement. To solve this problem, a value-decomposition-based, entropy-maximized multi-agent reinforcement learning (RL) algorithm is proposed, which enables servers to coordinate during training and execute RB allocation in a distributed manner, approaching globally optimal performance with fewer training iterations. Compared to traditional multi-agent RL, the proposed RL improves the exploration of valuable server actions and the probability of finding a globally optimal RB allocation policy based on local observations. Simulation results show that the proposed algorithm can reduce the transmission delay by up to 16.1% compared to traditional multi-agent RL.
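A rough sketch of the two ingredients named above, value decomposition and entropy maximization, is given below; the additive (VDN-style) mixer, network sizes, and entropy weight are assumptions, not the paper's design.

```python
# Per-agent Q-networks combined by additive value decomposition, plus an entropy bonus.
import torch
import torch.nn as nn

num_servers, obs_dim, num_rb_actions = 3, 8, 5   # servers choose among RB allocation actions

agent_nets = nn.ModuleList(
    nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, num_rb_actions))
    for _ in range(num_servers)
)

local_obs = torch.randn(num_servers, obs_dim)  # each server's local observation
q_values = torch.stack([net(o) for net, o in zip(agent_nets, local_obs)])  # (servers, actions)

policies = torch.softmax(q_values, dim=-1)
entropy = -(policies * torch.log(policies + 1e-8)).sum(-1).mean()  # encourages action exploration

chosen = q_values.argmax(dim=-1)                          # greedy per-agent RB action
joint_q = q_values.gather(1, chosen[:, None]).sum()       # additive value decomposition of the team value
objective = joint_q + 0.01 * entropy                      # entropy-regularized objective (weight assumed)
print(chosen.tolist(), float(objective))
```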
GPUs built on new architectures, such as the A100, are now equipped with Multi-Instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology gives users more flexibility to support both deep learning training and inference workloads, but utilizing it efficiently can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to employ MIG effectively, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released on https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
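As a small illustration of how workloads can be pinned to individual MIG instances (independent of MIGPerf itself), the sketch below parses MIG device UUIDs from `nvidia-smi -L` and launches one placeholder workload per instance via CUDA_VISIBLE_DEVICES; the benchmark script name is hypothetical.

```python
# Enumerate MIG instances and pin one workload to each.
import os
import re
import subprocess

# `nvidia-smi -L` lists MIG devices with lines like "MIG 1g.5gb Device 0: (UUID: MIG-...)".
listing = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout
mig_uuids = re.findall(r"UUID:\s*(MIG-[^\s)]+)", listing)

for uuid in mig_uuids:
    # Pin a placeholder workload to this MIG instance; a real study would launch
    # its training/inference benchmark here instead of the hypothetical script.
    env = {**os.environ, "CUDA_VISIBLE_DEVICES": uuid}
    subprocess.Popen(["python", "run_benchmark.py"], env=env)
```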
With the development of technology and the sharing economy, Airbnb, a famous short-term rental platform, has become the first choice for many young people. The pricing of Airbnb listings has always been a problem worth studying. While previous studies achieve promising results, deficiencies remain: (1) the feature attributes of rentals are not rich enough; (2) research on rental text information is not deep enough; (3) there are few studies that predict the rental price in combination with the points of interest (POI) around the house. To address the above challenges, we propose a multi-source information embedding (MSIE) model to predict the rental price of Airbnb listings. Specifically, we first select statistical features to embed the original rental data. Secondly, we generate word feature vectors and emotional scores from three different kinds of text information to form the text feature embedding. Thirdly, we use the points of interest (POI) around the rental house to generate a variety of spatial network graphs and learn embeddings of these networks to obtain the spatial feature embedding. Finally, we combine the three modules into a multi-source rental representation and use a fully connected neural network to predict the price. The analysis of the experimental results shows the effectiveness of our proposed model.
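The fusion step described above can be sketched as a simple concatenate-and-regress network; all dimensions and layer sizes below are assumptions rather than the paper's configuration.

```python
# Concatenate statistical, text, and spatial embeddings, then regress the price with an MLP.
import torch
import torch.nn as nn

stat_dim, text_dim, spatial_dim = 32, 128, 64

price_head = nn.Sequential(
    nn.Linear(stat_dim + text_dim + spatial_dim, 128),
    nn.ReLU(),
    nn.Linear(128, 1),   # predicted rental price
)

# Placeholder embeddings for one listing; in practice these come from the three modules.
stat_emb = torch.randn(1, stat_dim)
text_emb = torch.randn(1, text_dim)
spatial_emb = torch.randn(1, spatial_dim)

price = price_head(torch.cat([stat_emb, text_emb, spatial_emb], dim=-1))
print(price.item())
```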